
    Efficient Algorithms for the Closest Pair Problem and Applications

    The closest pair problem (CPP) is one of the most well-studied and fundamental problems in computing. Given a set of points in a metric space, the problem is to identify the pair of closest points. Another closely related problem is the fixed radius nearest neighbors problem (FRNNP). Given a set of points and a radius R, the problem is, for every input point p, to identify all the other input points that are within a distance of R from p. A naive deterministic algorithm can solve these problems in quadratic time. CPP as well as FRNNP play a vital role in computational biology, computational finance, share market analysis, weather prediction, entomology, electrocardiography, N-body simulations, molecular simulations, etc. As a result, any improvements made in solving CPP and FRNNP will have immediate implications for the solution of numerous problems in these domains. We live in an era of big data, and processing these data takes large amounts of time. Speeding up data processing algorithms is thus much more essential now than ever before. In this paper we present algorithms for CPP and FRNNP that improve (in theory and/or practice) on the best-known algorithms reported in the literature. These algorithms also improve the best-known algorithms for related applications including time series motif mining and the two-locus problem in Genome Wide Association Studies (GWAS).
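    To make the two problem statements concrete, the sketch below spells out the naive quadratic-time approach the abstract refers to, for 2D Euclidean points. The type and function names (Point, closestPair, fixedRadiusNeighbors) are illustrative only and do not come from the paper; the paper's improved algorithms are not shown here.

```cpp
// Naive O(n^2) solutions to CPP and FRNNP for 2D Euclidean points (illustration only).
#include <cmath>
#include <limits>
#include <utility>
#include <vector>

struct Point { double x, y; };

double dist(const Point& a, const Point& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// CPP: return the indices of the closest pair by checking all pairs.
// Assumes the input contains at least two points.
std::pair<std::size_t, std::size_t> closestPair(const std::vector<Point>& pts) {
    double best = std::numeric_limits<double>::infinity();
    std::pair<std::size_t, std::size_t> bestPair{0, 1};
    for (std::size_t i = 0; i < pts.size(); ++i)
        for (std::size_t j = i + 1; j < pts.size(); ++j)
            if (double d = dist(pts[i], pts[j]); d < best) {
                best = d;
                bestPair = {i, j};
            }
    return bestPair;
}

// FRNNP: for every point p, list all other points within distance R of p.
std::vector<std::vector<std::size_t>> fixedRadiusNeighbors(const std::vector<Point>& pts, double R) {
    std::vector<std::vector<std::size_t>> nbrs(pts.size());
    for (std::size_t i = 0; i < pts.size(); ++i)
        for (std::size_t j = 0; j < pts.size(); ++j)
            if (i != j && dist(pts[i], pts[j]) <= R)
                nbrs[i].push_back(j);
    return nbrs;
}
```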

    An Elegant Algorithm for the Construction of Suffix Arrays

    The suffix array is a data structure that finds numerous applications in string processing problems for both linguistic texts and biological data. It was introduced as a memory-efficient alternative to suffix trees. The suffix array consists of the sorted suffixes of a string. There are several linear-time suffix array construction algorithms (SACAs) known in the literature. However, one of the fastest algorithms in practice has a worst-case run time of O(n^2). The problem of designing practically and theoretically efficient techniques remains open. In this paper we present an elegant algorithm for suffix array construction which takes linear time with high probability; here the probability is over the space of all possible inputs. Our algorithm is one of the simplest of the known SACAs and it opens up a new dimension of suffix array construction that has not been explored until now. Our algorithm is easily parallelizable. We offer parallel implementations on various parallel models of computing. We prove a lemma on the ℓ-mers of a random string which might find independent applications. We also present another algorithm that utilizes the above algorithm. This algorithm is called RadixSA and has a worst-case run time of O(n log n). RadixSA introduces an idea that may find independent applications as a speedup technique for other SACAs. An empirical comparison of RadixSA with other algorithms on various datasets reveals that our algorithm is one of the fastest algorithms to date. The C++ source code is freely available at http://www.engr.uconn.edu/~man09004/radixSA.zi
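    For reference, a minimal suffix array construction that simply sorts suffix start indices is sketched below. It illustrates what the data structure contains; it is not the RadixSA algorithm described above, and its worst-case run time is O(n^2 log n) because suffix comparisons can take linear time.

```cpp
// Naive suffix array construction: sort suffix start positions lexicographically.
#include <algorithm>
#include <numeric>
#include <string>
#include <vector>

std::vector<int> buildSuffixArray(const std::string& s) {
    std::vector<int> sa(s.size());
    std::iota(sa.begin(), sa.end(), 0);          // each suffix is identified by its start index
    std::sort(sa.begin(), sa.end(), [&](int a, int b) {
        // Compare the suffixes starting at positions a and b.
        return s.compare(a, std::string::npos, s, b, std::string::npos) < 0;
    });
    return sa;                                   // sa[i] = start of the i-th smallest suffix
}

// Example: buildSuffixArray("banana") yields {5, 3, 1, 0, 4, 2}.
```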

    On the Convergence Time of Simulated Annealing

    Simulated Annealing is a family of randomized algorithms used to solve many combinatorial optimization problems. In practice they have been applied to solve some presumably hard (e.g., NP-complete) problems. The level of performance obtained has been promising [5, 2, 6, 14]. The success of this heuristic technique has motivated analysis of the algorithm from a theoretical point of view. In particular, researchers have looked at the convergence of this algorithm. It has been shown (see e.g., [10]) that this algorithm converges in the limit to a globally optimal solution with probability 1. However, few of these convergence results specify a time limit within which the algorithm is guaranteed to converge (with some high probability, say). We present, for the first time, a simple analysis of SA that provides a time bound for convergence with overwhelming probability. The analysis holds no matter what annealing schedule is used. Convergence of Simulated Annealing in the limit follows as a corollary to our time convergence proof. In this paper we also look at optimization problems for which the cost function has some special properties. We prove that for these problems the convergence is much faster. In particular, we give a simpler and more general proof of convergence for Nested Annealing, a heuristic algorithm developed in [12]. Nested Annealing is based on defining a graph corresponding to the given optimization problem. If this graph is 'small separable', the authors of [12] show that Nested Annealing will converge 'faster'. For an arbitrary optimization problem, we may not have any knowledge about the 'separability' of its graph. In this paper we give tight bounds for the 'separability' of a random graph. We then use these bounds to analyze the expected behavior of Nested Annealing on an arbitrary optimization problem. The 'separability' bounds we derive in this paper are of independent interest and have the potential of finding other applications.
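    A generic simulated annealing loop is sketched below purely to fix the notions of cost, neighbor, and annealing schedule used above. The three callables are placeholders supplied by the caller for a concrete optimization problem, and the sketch makes no claim about the convergence bounds analyzed in the paper.

```cpp
// Generic simulated annealing loop (illustration of the heuristic, not the paper's analysis).
#include <cmath>
#include <random>

template <typename State, typename Cost, typename Neighbor, typename Schedule>
State simulatedAnnealing(State current, Cost cost, Neighbor neighbor,
                         Schedule temperature, long maxSteps) {
    std::mt19937 rng(std::random_device{}());
    std::uniform_real_distribution<double> coin(0.0, 1.0);
    State best = current;
    for (long t = 0; t < maxSteps; ++t) {
        State next = neighbor(current, rng);        // propose a random local move
        double delta = cost(next) - cost(current);
        // Always accept improvements; accept worse moves with probability exp(-delta / T(t)).
        if (delta <= 0.0 || coin(rng) < std::exp(-delta / temperature(t)))
            current = next;
        if (cost(current) < cost(best))
            best = current;
    }
    return best;
}

// Example schedule (hypothetical): a geometric cooling schedule such as
//   [](long t) { return 100.0 * std::pow(0.999, t); }
// can be passed as the `temperature` argument.
```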

    Randomized Parallel Selection

    We show that selection on an input of size N can be performed on a P-node hypercube (P = N/(log N)) in time O(N/P) with high probability, provided each node can process all of its incident edges in one unit of time (this model is called the parallel model and has been assumed by previous researchers (e.g., [17])). This result is important in view of a lower bound of Plaxton that implies selection takes Ω((N/P) log log P + log P) time on a P-node hypercube if each node can process only one edge at a time (this model is referred to as the sequential model).
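    For readers unfamiliar with the selection problem, a sequential randomized selection (quickselect) routine is sketched below. It only illustrates the problem being solved; it is not the hypercube algorithm the abstract describes.

```cpp
// Sequential randomized selection: k-th smallest element (0-indexed), expected O(N) time.
#include <random>
#include <utility>
#include <vector>

int randomizedSelect(std::vector<int> a, std::size_t k, std::mt19937& rng) {
    // Assumes a is non-empty and k < a.size().
    std::size_t lo = 0, hi = a.size();            // the answer always lies in a[lo, hi)
    while (true) {
        std::uniform_int_distribution<std::size_t> pick(lo, hi - 1);
        int pivot = a[pick(rng)];
        // Three-way partition of a[lo, hi) around the random pivot.
        std::size_t lt = lo, i = lo, gt = hi;
        while (i < gt) {
            if (a[i] < pivot)      std::swap(a[i++], a[lt++]);
            else if (a[i] > pivot) std::swap(a[i], a[--gt]);
            else                   ++i;
        }
        if (k < lt)       hi = lt;                // answer is among the smaller elements
        else if (k >= gt) lo = gt;                // answer is among the larger elements
        else              return pivot;           // k falls in the block equal to the pivot
    }
}
```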

    Randomized Algorithms For Packet Routing on the Mesh

    Packet routing is an important problem in parallel computing since a fast algorithm for packet routing implies 1) fast inter-processor communication, and 2) fast algorithms for emulating ideal models like PRAMs on fixed-connection machines. There are three different models of packet routing, namely 1) Store and forward, 2) Multipacket, and 3) Cut through. In this paper we provide a survey of the best known randomized algorithms for store and forward routing, k-k routing, and cut through routing on Mesh Connected Computers.

    k-k Routing, k-k Sorting, and Cut Through Routing on the Mesh

    In this paper we present randomized algorithms for k-k routing, k-k sorting, and cut through routing. The stated resource bounds hold with high probability. The algorithm for k-k routing runs in [k/2]n + o(kn) steps. We also show that k-k sorting can be accomplished within [k/2]n + n + o(kn) steps, and cut through routing can be done in [3/4]kn + [3/2]n + o(kn) steps. The best known time bounds (prior to this paper) for all three problems were kn + o(kn). [kn/2] is a known lower bound for all three problems (which is the bisection bound), and hence our algorithms are very nearly optimal. All the above-mentioned algorithms have optimal queue length, namely k + o(k). These algorithms also extend to higher-dimensional meshes.

    Mesh Connected Computers With Multiple Fixed Buses: Packet Routing, Sorting and Selection

    Mesh connected computers have become attractive models of computing because of their varied special features. In this paper we consider two variations of the mesh model: 1) a mesh with fixed buses, and 2) a mesh with reconfigurable buses. Both these models have been the subject of extensive previous research. We solve numerous important problems related to packet routing, sorting, and selection on these models. In particular, we provide lower bounds and very nearly matching upper bounds for the following problems on both these models: 1) Routing on a linear array; and 2) k-k routing, k-k sorting, and cut through routing on a 2D mesh for any k ≥ 12. We provide an improved algorithm for 1-1 routing and a matching sorting algorithm. In addition we present greedy algorithms for 1-1 routing, k-k routing, cut through routing, and k-k sorting that are better on average, and we supply matching lower bounds. We also show that sorting can be performed in logarithmic time on a mesh with fixed buses. As a consequence we present an optimal randomized selection algorithm. In addition we provide a selection algorithm for the mesh with reconfigurable buses whose time bound is significantly better than the existing ones. Our algorithms have considerably better time bounds than many existing best-known algorithms.